Exploiting Tractable Substructures in Intractable Networks
Saul, Lawrence K., Jordan, Michael I.

We develop a refined mean field approximation for inference and learning in probabilistic neural networks. Our mean field theory, unlike most, does not assume that the units behave as independent degrees of freedom; instead, it exploits in a principled way the existence of large substructures that are computationally tractable. To illustrate the advantages of this framework, we show how to incorporate weak higher order interactions into a first-order hidden Markov model, treating the corrections (but not the first order structure) within mean field theory.

1 INTRODUCTION

Learning the parameters in a probabilistic neural network may be viewed as a problem in statistical estimation.
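For concreteness, here is a minimal sketch of the variational bound that structured mean field methods of this kind rest on; the notation (Q for the approximating distribution, S for the hidden units, V for the visible units) is ours and does not appear on this page.

\[
\ln P(V) \;\ge\; \sum_{S} Q(S)\,\ln\frac{P(S, V)}{Q(S)}
\]

A fully factorized mean field approximation takes Q(S) = \prod_i q_i(S_i), discarding all dependencies among hidden units. The refinement described in the abstract instead lets Q retain a computationally tractable substructure, such as the chain of a first-order hidden Markov model, so that exact inference handles the first-order structure and only the weak higher-order interactions are treated approximately.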